Vision transformers are now the de facto choice for image classification tasks. Classification tasks fall into two categories: fine-grained and coarse-grained. In fine-grained classification, subtle differences must be discovered due to the high similarity between subclasses. This distinction is often lost when we downscale images to save on the computational cost associated with vision transformers (ViT). In this work, we present an in-depth analysis and describe the key components for developing a system for the fine-grained classification of herbarium sheets. Our extensive experimental analysis indicates the need for better augmentation techniques and for modern neural networks that can handle higher-dimensional images. We also introduce a convolutional transformer architecture called Anciformer that, unlike popular vision transformers such as ConViT, can handle higher-resolution images without exploding memory and computational costs. We further introduce a novel, improved preprocessing technique called Presizer to better resize images while preserving their original aspect ratios, which proved crucial for classifying natural plants. With our simple yet effective approach, we achieve SOTA on the 202X and iNaturalist 2019 datasets.
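The aspect-ratio-preserving resize described above can be sketched as follows. This is a minimal illustration of the idea, not the authors' Presizer implementation: scale the longer side to the target size, then pad the shorter side so the original aspect ratio is never distorted.

```python
# Hypothetical sketch of a Presizer-style resize: the image is scaled so its
# longer side matches the target canvas, and the remainder is padding.
def presize(width, height, target):
    """Return (new_w, new_h, pad_w, pad_h) for a square target canvas."""
    scale = target / max(width, height)            # fit the longest side exactly
    new_w, new_h = round(width * scale), round(height * scale)
    pad_w, pad_h = target - new_w, target - new_h  # padding preserves aspect ratio
    return new_w, new_h, pad_w, pad_h
```

A 4000x3000 herbarium scan presized to a 1000-pixel canvas would be scaled to 1000x750 and padded by 250 pixels vertically, rather than being squashed to 1000x1000.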
A fundamental component of human vision is our ability to parse complex visual scenes and judge the relations between their constituent objects. AI benchmarks for visual reasoning have driven rapid progress in recent years, with state-of-the-art systems now attaining human-level accuracy on some of these benchmarks. Yet, a substantial gap remains between humans and AI systems in terms of the sample efficiency with which they learn new visual reasoning tasks. Humans' remarkable efficiency at learning has been attributed, at least in part, to their ability to harness compositionality, allowing them to efficiently leverage previously acquired knowledge when learning new tasks. Here, we introduce a novel visual reasoning benchmark, Compositional Visual Relations (CVR), to drive progress toward the development of more data-efficient learning algorithms. We take inspiration from fluid intelligence and non-verbal reasoning tests, and describe a novel method for creating compositions of abstract rules and associated image datasets. Our proposed benchmark includes measures of sample efficiency, generalization, and transfer across task rules, as well as the ability to exploit compositionality. We systematically evaluate modern neural architectures and find that, surprisingly, convolutional architectures surpass transformer-based architectures on all performance measures in most data regimes. However, all computational models are far less data-efficient than humans, even after learning informative visual representations with self-supervision. Overall, we hope our challenge will spur interest in developing neural architectures that can learn to harness compositionality for more efficient learning.
Humans continue to vastly outperform modern AI systems in their ability to parse and flexibly understand complex visual scenes. Attention and memory are two systems known to play a critical role in our ability to selectively maintain and manipulate behaviorally relevant visual information in order to solve some of the most challenging visual reasoning tasks. Here, we introduce a novel architecture for visual reasoning inspired by the cognitive-science literature: the Memory- and Attention-based (visual) REasOning (MAREO) architecture. MAREO instantiates an active-vision theory, which posits that the brain solves complex visual reasoning problems compositionally, by learning to combine previously learned elementary visual operations into more complex visual routines. MAREO learns to solve visual reasoning tasks via sequences of attention shifts that route task-relevant visual information into a memory bank, where it is maintained via a multi-head transformer module. Visual routines are then deployed by a dedicated reasoning module trained to judge various relations between objects in the scene. Experiments on four types of reasoning tasks demonstrate MAREO's ability to learn visual routines in a robust and sample-efficient manner.
Visual understanding requires comprehending complex visual relations between objects within a scene. Here, we seek to characterize the computational demands of abstract visual reasoning. We do so by systematically assessing the ability of modern deep convolutional neural networks (CNNs) to learn to solve the Synthetic Visual Reasoning Test (SVRT) challenge, a collection of twenty-three visual reasoning problems. Our analysis reveals a novel taxonomy of visual reasoning tasks, which can be explained by the type of relations involved (same-different versus spatial-relation judgments) and the number of relations used to compose the underlying rules. Prior cognitive neuroscience work suggests that attention plays a key role in humans' visual reasoning ability. To test this hypothesis, we extended the CNNs with spatial and feature-based attention mechanisms. In a second series of experiments, we evaluated the ability of these attention networks to learn to solve the SVRT challenge and found the resulting architectures to be much more effective at solving the hardest of these visual reasoning tasks. Most importantly, the corresponding improvements on individual tasks partially explained our novel taxonomy. Overall, this work provides a granular computational account of visual reasoning and yields testable neuroscience predictions regarding the differential need for feature-based versus spatial attention, depending on the type of visual reasoning problem.
Prompting large language models has enabled significant recent progress in multi-step reasoning over text. However, when applied to text generation from semi-structured data (e.g., graphs or tables), these methods typically suffer from low semantic coverage, hallucination, and logical inconsistency. We propose MURMUR, a neuro-symbolic modular approach to text generation from semi-structured data with multi-step reasoning. MURMUR is a best-first search method that generates reasoning paths using: (1) neural and symbolic modules with specific linguistic and logical skills, (2) a grammar whose production rules define valid compositions of modules, and (3) value functions that assess the quality of each reasoning step. We conduct experiments on two diverse data-to-text generation tasks like WebNLG and LogicNLG. These tasks differ in their data representations (graphs and tables) and span multiple linguistic and logical skills. MURMUR obtains significant improvements over recent few-shot baselines like direct prompting and chain-of-thought prompting, while also achieving comparable performance to fine-tuned GPT-2 on out-of-domain data. Moreover, human evaluation shows that MURMUR generates highly faithful and correct reasoning paths that lead to 26% more logically consistent summaries on LogicNLG, compared to direct prompting.
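The best-first search loop at the heart of the method can be sketched generically. The expansion, scoring, and goal functions below are illustrative stand-ins for MURMUR's modules, grammar, and value functions, not the paper's actual components:

```python
import heapq

# Generic best-first search over reasoning paths: a value function ranks
# partial paths, and the highest-scoring path is expanded first.
def best_first_search(initial, expand, value, is_goal, max_steps=100):
    """expand(path) yields child paths; value(path) scores them (higher = better)."""
    frontier = [(-value(initial), initial)]        # max-heap via negated scores
    for _ in range(max_steps):
        if not frontier:
            return None
        _, path = heapq.heappop(frontier)          # best-scoring partial path first
        if is_goal(path):
            return path
        for child in expand(path):                 # grammar-valid continuations
            heapq.heappush(frontier, (-value(child), child))
    return None
```

In MURMUR's setting, `expand` would apply only the module compositions the grammar's production rules permit, and `value` would aggregate the per-step quality scores.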
Vision transformers (ViTs) have achieved impressive results on various computer vision tasks in the last several years. In this work, we study the capability of frozen ViTs, pretrained only on visual data, to generalize to audio-visual data without finetuning any of its original parameters. To do so, we propose a latent audio-visual hybrid (LAVISH) adapter that adapts pretrained ViTs to audio-visual tasks by injecting a small number of trainable parameters into every layer of a frozen ViT. To efficiently fuse visual and audio cues, our LAVISH adapter uses a small set of latent tokens, which form an attention bottleneck, thus, eliminating the quadratic cost of standard cross-attention. Compared to the existing modality-specific audio-visual methods, our approach achieves competitive or even better performance on various audio-visual tasks while using fewer tunable parameters and without relying on costly audio pretraining or external audio encoders. Our code is available at https://genjib.github.io/project_page/LAVISH/
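The attention-bottleneck idea can be illustrated with a toy single-head attention in which a small, fixed set of latent query tokens attends over the full token set. The shapes and values are illustrative; this is a sketch of the bottleneck principle, not the LAVISH adapter itself:

```python
import math

# Toy single-head attention: L latent queries attend over N keys/values,
# so the cost is O(L*N) instead of the O(N^2) of full cross-attention.
def attend(queries, keys, values):
    """queries: L x D latents; keys/values: N x D tokens -> L x D fused latents."""
    out = []
    for q in queries:                                    # only L latents iterate
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(len(q))
                  for k in keys]
        m = max(scores)
        weights = [math.exp(s - m) for s in scores]      # numerically stable softmax
        z = sum(weights)
        weights = [w / z for w in weights]
        out.append([sum(w * v[d] for w, v in zip(weights, values))
                    for d in range(len(values[0]))])
    return out
```

With a handful of latent tokens summarizing both the audio and visual streams, the two modalities never attend to each other directly, which is where the quadratic term disappears.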
The last several years have witnessed remarkable progress in video-and-language (VidL) understanding. However, most modern VidL approaches use complex and specialized model architectures and sophisticated pretraining protocols, making the reproducibility, analysis and comparisons of these frameworks difficult. Hence, instead of proposing yet another new VidL model, this paper conducts a thorough empirical study demystifying the most important factors in the VidL model design. Among the factors that we investigate are (i) the spatiotemporal architecture design, (ii) the multimodal fusion schemes, (iii) the pretraining objectives, (iv) the choice of pretraining data, (v) pretraining and finetuning protocols, and (vi) dataset and model scaling. Our empirical study reveals that the most important design factors include: temporal modeling, video-to-text multimodal fusion, masked modeling objectives, and joint training on images and videos. Using these empirical insights, we then develop a step-by-step recipe, dubbed VindLU, for effective VidL pretraining. Our final model trained using our recipe achieves comparable or better than state-of-the-art results on several VidL tasks without relying on external CLIP pretraining. In particular, on the text-to-video retrieval task, our approach obtains 61.2% on DiDeMo, and 55.0% on ActivityNet, outperforming current SOTA by 7.8% and 6.1% respectively. Furthermore, our model also obtains state-of-the-art video question-answering results on ActivityNet-QA, MSRVTT-QA, MSRVTT-MC and TVQA. Our code and pretrained models are publicly available at: https://github.com/klauscc/VindLU.
We propose Universal Document Processing (UDOP), a foundation Document AI model which unifies text, image, and layout modalities together with varied task formats, including document understanding and generation. UDOP leverages the spatial correlation between textual content and document image to model image, text, and layout modalities with one uniform representation. With a novel Vision-Text-Layout Transformer, UDOP unifies pretraining and multi-domain downstream tasks into a prompt-based sequence generation scheme. UDOP is pretrained on both large-scale unlabeled document corpora using innovative self-supervised objectives and diverse labeled data. UDOP also learns to generate document images from text and layout modalities via masked image reconstruction. To the best of our knowledge, this is the first time in the field of document AI that one model simultaneously achieves high-quality neural document editing and content customization. Our method sets the state-of-the-art on 9 Document AI tasks, e.g., document understanding and QA, across diverse data domains like finance reports, academic papers, and websites. UDOP ranks first on the leaderboard of the Document Understanding Benchmark (DUE).
Recent datasets expose the lack of the systematic generalization ability in standard sequence-to-sequence models. In this work, we analyze this behavior of seq2seq models and identify two contributing factors: a lack of mutual exclusivity bias (i.e., a source sequence already mapped to a target sequence is less likely to be mapped to other target sequences), and the tendency to memorize whole examples rather than separating structures from contents. We propose two techniques to address these two issues respectively: Mutual Exclusivity Training that prevents the model from producing seen generations when facing novel, unseen examples via an unlikelihood-based loss; and prim2primX data augmentation that automatically diversifies the arguments of every syntactic function to prevent memorizing and provide a compositional inductive bias without exposing test-set data. Combining these two techniques, we show substantial empirical improvements using standard sequence-to-sequence models (LSTMs and Transformers) on two widely-used compositionality datasets: SCAN and COGS. Finally, we provide analysis characterizing the improvements as well as the remaining challenges, and provide detailed ablations of our method. Our code is available at https://github.com/owenzx/met-primaug
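An unlikelihood-style loss of the kind the abstract describes can be sketched as follows. This is a simplified illustration of the general unlikelihood idea, not the paper's exact Mutual Exclusivity Training objective: the gold token is rewarded with the usual likelihood term, while probability mass placed on "negative" tokens (e.g., previously seen generations) is penalized.

```python
import math

# Likelihood + unlikelihood: maximize log p(gold) while minimizing
# log p(negative) via the -log(1 - p) penalty on each negative token.
def me_loss(probs, gold, negatives):
    """probs: token -> probability; returns the combined training loss."""
    nll = -math.log(probs[gold])                               # standard NLL term
    unlikelihood = -sum(math.log(1.0 - probs[t]) for t in negatives)
    return nll + unlikelihood
```

The penalty grows without bound as the model pushes a seen generation's probability toward 1 on a novel input, which is what discourages mapping a new source sequence onto an already-used target.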
Humans exhibit disagreement during data labeling. We term this disagreement as human label uncertainty. In this work, we study the ramifications of human label uncertainty (HLU). Our evaluation of existing uncertainty estimation algorithms, with the presence of HLU, indicates the limitations of existing uncertainty metrics and algorithms themselves in response to HLU. Meanwhile, we observe undue effects in predictive uncertainty and generalizability. To mitigate the undue effects, we introduce a novel natural scene statistics (NSS) based label dilution training scheme without requiring massive human labels. Specifically, we first select a subset of samples with low perceptual quality ranked by statistical regularities of images. We then assign separate labels to each sample in this subset to obtain a training set with diluted labels. Our experiments and analysis demonstrate that training with NSS-based label dilution alleviates the undue effects caused by HLU.
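The selection-and-dilution step can be sketched in outline. The quality scores and dilution weights below are illustrative assumptions, not the paper's NSS scoring function: samples are ranked by a perceptual-quality score, the lowest-quality fraction is selected, and each selected sample's hard label is softened.

```python
# Sketch of NSS-style label dilution: rank by a quality score, pick the
# lowest-quality fraction, and replace each hard label with a soft one.
def dilute_labels(samples, quality, frac=0.2, smooth=0.5, num_classes=3):
    """samples: list of (id, label); returns id -> diluted label distribution."""
    ranked = sorted(samples, key=lambda s: quality[s[0]])   # lowest quality first
    k = max(1, int(len(ranked) * frac))
    diluted = {}
    for sid, label in ranked[:k]:
        soft = [(1.0 - smooth) / (num_classes - 1)] * num_classes
        soft[label] = smooth                                # soften the hard label
        diluted[sid] = soft
    return diluted
```

Only the statistically lowest-quality images receive diluted labels, which is what lets the scheme sidestep collecting multiple human labels per sample.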